Chunking or not chunking? How do we find words in artificial language learning?

Authors

  • Ana Franco
  • Arnaud Destrebecqz
Abstract

What is the nature of the representations acquired in implicit statistical learning? Recent results in the field of language learning have shown that adults and infants are able to find the words of an artificial language when exposed to a continuous auditory sequence consisting of a random ordering of these words. Such performance can only be based on processing the transitional probabilities between sequence elements. Two different kinds of mechanisms may account for these data: participants may either parse the sequence into smaller chunks corresponding to the words of the artificial language, or they may become progressively sensitive to the actual values of the transitional probabilities between syllables. The two accounts are difficult to differentiate because they make similar predictions in comparable experimental settings. In this study, we present two experiments that aimed at contrasting these two theories. In these experiments, participants had to learn two sets of pseudo-linguistic regularities, Language 1 (L1) and Language 2 (L2), presented in the context of a serial reaction time task. L1 and L2 were either unrelated (none of the syllabic transitions of L1 were present in L2) or partly related (some of the intra-word transitions of L1 were used as inter-word transitions of L2). The two accounts make opposite predictions in these two settings. Our results indicate that the nature of the representations depends on the learning condition. When cues were presented to facilitate parsing of the sequence, participants learned the words of the artificial language. However, when no cues were provided, performance was strongly influenced by the transitional probabilities employed.
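The forward transitional probability the abstract refers to is TP(y|x) = count(x→y) / count(x), estimated over adjacent syllable pairs in the stream. As a minimal illustrative sketch (the syllable stream and function name below are hypothetical, not taken from the study's materials), transitions inside a word come out high while transitions across word boundaries come out lower:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate forward transitional probabilities TP(y|x) = count(x->y) / count(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    # The last syllable never serves as the first element of a pair.
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Hypothetical stream built from two "words" (tu-pi-ro and go-la-bu)
# concatenated in varying order, as in artificial-language studies.
stream = "tu pi ro go la bu tu pi ro tu pi ro go la bu".split()
tps = transitional_probabilities(stream)
# Intra-word transitions (e.g. tu->pi) are at or near 1.0, while
# inter-word transitions (e.g. ro->go, ro->tu) are lower, so dips in
# TP can mark candidate word boundaries.
```

A chunking account would instead store recurring high-TP runs such as `tu-pi-ro` as units; the point of the study is that both mechanisms predict the same boundary detection in simple settings.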


Related articles

Under Pressure: How Time-Limited Cognition Explains Statistical Learning by 8-Month Old Infants

In a classic experiment, Saffran, Aslin, and Newport (1996) used a headturn preference procedure to show that infants can discriminate between familiar syllable sequences (“words”) and new syllable sequences (“non-words” and “part-words”). While several computational models have simulated aspects of their data and proposed that the learning of transitional probabilities could be mediated by neu...

Full text

TRACX2: a RAAM-like autoencoder modeling graded chunking in infant visual-sequence learning

Even newborn infants are able to extract structure from a stream of sensory inputs and yet, how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights, and recognizing these chunks when they re-occur in th...

Full text

Word Representations: A Simple and General Method for Semi-Supervised Learning

If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word repre...

Full text

A Supervised Learning based Chunking in Thai using Categorial Grammar

One of the challenging problems in Thai NLP is managing the syntactic analysis of long sentences. This paper applies conditional random fields and categorial grammar to develop a chunking method that can group words into larger units. Based on the experiment, we found impressive results: we gain around 74.17% on sentence-level chunking. Furthermore, we got a more correct pars...

Full text

Chunking Models of Expertise: Implications for Education

Chunking models offer a parsimonious explanation of how people acquire knowledge and have been validated in domains such as expert behaviour and the acquisition of language. In this paper, we review two computational theories based on chunking mechanisms (the chunking theory and the template theory) and show what insight they offer for instruction and training. The suggested implications includ...

Full text



Journal:

Volume 8, Issue 

Pages  -

Publication date: 2012